Handling Model Uncertainty and Multiplicity in Explanations via Model Reconciliation
Authors
Abstract
Model reconciliation has been proposed as a way for an agent to explain its decisions to a human who may have a different understanding of the same planning problem, by framing those explanations in terms of the differences between the two models. However, the human's mental model (and hence the model difference) is often not known precisely, and such explanations cannot be readily computed. In this paper, we show how the explanation generation process evolves in the presence of such model uncertainty or incompleteness by generating conformant explanations that are applicable to a set of possible models. We also show how such explanations can contain superfluous information, and how such redundancies can be reduced using conditional explanations that iterate with the human to attain common ground. Finally, we introduce an anytime approach to this problem and empirically demonstrate the trade-offs involved in the different forms of explanations in terms of the computational overhead for the agent and the communication overhead for the human. We illustrate these concepts in three well-known planning domains as well as in a demonstration on a robot involved in a typical search and reconnaissance scenario with an external human supervisor.

Introduction

In (Chakraborti et al. 2017) it was shown how a robot can explain its decisions to a human in the loop who might have a different understanding of the same problem (either in terms of the agent's knowledge or intentions, or in terms of its capabilities). These explanations are intended to bring the human's mental model closer to the robot's estimation of the ground truth; this is referred to as the model reconciliation process, by the end of which a plan that is optimal in the robot's model is also estimated to be optimal in the human's updated mental model. It was also shown how this process can be achieved while transferring the minimum number of model updates possible, via what are called minimally complete explanations (MCEs).
Explanations of this form are inspired by work (Lombrozo 2006; 2012; Miller 2017) that identifies desirable properties of explanations in terms of selectivity, contrastiveness, and mental modeling of the explainee. Such techniques can thus be essential contributors to the dynamics of trust and teamwork in human-agent collaborations: they significantly lower the communication overhead between agents while at the same time providing the right amount of information to keep the agents on the same page with respect to their understanding of each other's tasks and capabilities, thereby reducing the cognitive burden on the human teammates and increasing their situational awareness.

Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Figure 1: The model reconciliation process in the case of model uncertainty or multiple explainees.

This process of model reconciliation is illustrated in Figure 1. The robot's model, which is its estimate of the ground truth, is represented by M^R (note: the "model" of a planning problem includes the state and goal information as well as the domain or action model), and π*_{M^R} is the optimal plan in it. A human H who is interacting with the robot may have a different model M^H of the same planning problem, and the optimal plan π*_{M^H} in the human's model can diverge from the robot's, leading to the robot needing to explain its decision to the human. As explained above, an explanation is an update or correction to the human's mental model, producing a new intermediate model M̂^H in which (according to cost or some other suitable measure of similarity) the optimal plan π*_{M̂^H} is equivalent to the original plan π*_{M^R}. However, this process is only feasible if the inconsistencies between the robot's model and the human's mental model are known precisely. The authors of (Chakraborti et al. 2017) make this assumption, which is often hard to realize in practice.
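The MCE search described above can be pictured as a smallest-first search in model space. The sketch below is a toy illustration under assumed abstractions that do not come from the paper: a model is a frozenset of "model features", and `is_optimal` is a stand-in oracle for the optimal-planning check performed at each search node.

```python
from itertools import combinations

def mce_search(robot_model, human_model, is_optimal, plan):
    """Smallest-first search in model space: try ever-larger subsets of
    the differences between the robot's and the human's model until the
    robot's plan becomes optimal in the updated human model."""
    diff = sorted(robot_model.symmetric_difference(human_model))
    for k in range(len(diff) + 1):            # smallest subsets first
        for subset in combinations(diff, k):
            updated = set(human_model)
            for f in subset:
                # Each update either adds a feature the human is missing
                # or removes one the human wrongly believes.
                if f in robot_model:
                    updated.add(f)
                else:
                    updated.discard(f)
            if is_optimal(frozenset(updated), plan):
                return set(subset)            # a minimal explanation
    return None

# Toy example: the robot's plan is optimal exactly in models that
# contain feature "a" and do not contain feature "x".
robot = frozenset({"a", "b"})
human = frozenset({"b", "x"})
is_opt = lambda model, plan: "a" in model and "x" not in model
print(sorted(mce_search(robot, human, is_opt, plan=None)))  # ['a', 'x']
```

In the actual setting each call to `is_optimal` is itself an optimal planning problem, which is what makes the smallest-subsets-first ordering important: the first explanation found is guaranteed to be a minimal one.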
Instead, the agent may end up having to explain its decisions with respect to a set of possible models, which is its best estimate of the human's knowledge state as learned over the course of interactions (Nguyen, Sreedharan, and Kambhampati 2017; Bryce, Benton, and Boldt 2016). In such a situation, the robot can, of course, call upon the previously mentioned services to compute MCEs for each possible configuration. However, the explanations computed independently for individual models may not be consistent across all of the possible target domains, and in the presence of model uncertainty such an approach cannot guarantee that the resulting explanation will be acceptable in the real domain. Instead, we want to find a single explanation such that ∀i : π*_{M̂^H_i} ≡ π*_{M^R}, where M̂^H_i is the i-th possible human model after the update. That is, one model update makes the given plan optimal (and hence explained) in all of the updated possible domains. At first glance, such an approach, even though desirable, might appear prohibitively expensive, especially since solving for even a single MCE involves a search in model space where each search node is an optimal planning problem. However, it turns out that the exact same search strategy can be employed here as well, by modifying how the models are represented and how the equivalence criterion is computed during the search.
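The point that the same search strategy carries over can be sketched with the same toy abstractions as before (frozenset models, an `is_optimal` stand-in oracle): only the acceptance test changes, from "optimal in the updated model" to "optimal in every updated possible model".

```python
from itertools import combinations

def conformant_mce(robot_model, possible_models, is_optimal, plan):
    """Same smallest-first subset search as a plain MCE, but a candidate
    explanation is accepted only if it makes the plan optimal in *every*
    possible human model (the conformant condition)."""
    diff = set()
    for m in possible_models:                 # union of all differences
        diff |= robot_model.symmetric_difference(m)
    for k in range(len(diff) + 1):
        for subset in combinations(sorted(diff), k):
            def update(model):
                u = set(model)
                for f in subset:
                    (u.add if f in robot_model else u.discard)(f)
                return frozenset(u)
            # Conformant check: the plan must be explained everywhere.
            if all(is_optimal(update(m), plan) for m in possible_models):
                return set(subset)
    return None

# Toy example with two possible human models; the plan is optimal
# exactly in models containing "a" but not "x".
robot = frozenset({"a", "b"})
possible = [frozenset({"b", "x"}), frozenset({"b"})]
is_opt = lambda model, plan: "a" in model and "x" not in model
print(sorted(conformant_mce(robot, possible, is_opt, plan=None)))  # ['a', 'x']
```

Note that the conformant explanation may be larger than the MCE for any single possible model, which is exactly the superfluous-information issue the paper addresses with conditional explanations.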
Thus, in this paper, we: (1) show how uncertainty over the human mental model can be represented in the form of annotated models; (2) outline how the concept of an MCE becomes one of conformant explanations in this revised setting, and how the search for these can be compiled to the original MCE search; (3) show how superfluous information in conformant explanations can be reduced interactively via conditional explanations, which can be computed in an anytime manner; (4) demonstrate how the model reconciliation process in the presence of multiple humans in the loop can be viewed as a special case of uncertain models; and finally (5) illustrate these concepts in a typical search and reconnaissance setting as well as with empirical results on a few well-known benchmark planning domains.
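As a rough illustration of item (3), a conditional explanation can be pictured as interactively narrowing the set of possible models: the agent queries the human about a feature the remaining candidates disagree on, discards the inconsistent models, and only then commits to a model-specific explanation. Everything below (the frozenset model encoding, `ask`, `per_model_mce`) is a hypothetical abstraction for illustration, not the paper's actual algorithm.

```python
def conditional_explanation(possible_models, per_model_mce, ask):
    """Narrow down the possible human models by asking yes/no questions
    about distinguishing features, then return the explanation for the
    single remaining model. Assumes the models are pairwise distinct."""
    candidates = list(possible_models)
    while len(candidates) > 1:
        # Pick any feature on which the remaining candidates disagree.
        all_feats = sorted(set().union(*candidates))
        feature = next(f for f in all_feats
                       if any((f in candidates[0]) != (f in m)
                              for m in candidates[1:]))
        answer = ask(feature)  # does the human's model contain it?
        candidates = [m for m in candidates if (feature in m) == answer]
    return per_model_mce[candidates[0]]

# Toy example: two possible human models, each with its own MCE.
m1 = frozenset({"b", "x"})
m2 = frozenset({"b"})
per_model_mce = {m1: {"a", "x"}, m2: {"a"}}
true_model = m2  # the human's actual model, unknown to the agent
print(sorted(conditional_explanation(
    [m1, m2], per_model_mce, ask=lambda f: f in true_model)))  # ['a']
```

Here one question ("does your model contain x?") lets the agent send the smaller, model-specific explanation {a} instead of the full conformant update {a, x}, trading a little extra interaction for less communicated information.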